Results 1 - 20 of 22
1.
J Ambient Intell Humaniz Comput ; : 1-13, 2023 May 27.
Article in English | MEDLINE | ID: covidwho-20242548

ABSTRACT

The spread of health misinformation has the potential to cause serious harm to public health, from vaccine hesitancy to the adoption of unproven disease treatments. It can also have broader societal effects, such as an increase in hate speech towards ethnic groups or medical experts. To counteract the sheer amount of misinformation, automatic detection methods are needed. In this paper, we conduct a systematic review of the computer science literature exploring text mining techniques and machine learning methods to detect health misinformation. To organize the reviewed papers, we propose a taxonomy, examine publicly available datasets, and conduct a content-based analysis to investigate analogies and differences among COVID-19 datasets and datasets related to other health domains. Finally, we describe open challenges and conclude with future directions.

2.
Psychol Res Behav Manag ; 16: 1495-1508, 2023.
Article in English | MEDLINE | ID: covidwho-2328355

ABSTRACT

Background: Pervasive health misinformation on social media affects people's health. Fact-checking health information before it is shared is an altruistic behavior that effectively addresses health misinformation on social media. Purpose: Based on the influence of presumed media influence (IPMI), this study serves two purposes: the first is to investigate factors that influence social media users' decisions to fact-check health information before sharing it, in accordance with the IPMI model. The second is to explore the different predictive powers of the IPMI model for individuals with different levels of altruism. Methods: This study conducted a questionnaire survey of 1045 Chinese adults. Participants were divided into either a low-altruism group (n = 545) or a high-altruism group (n = 500) at the median value of altruism. A multigroup analysis was conducted with the R lavaan package (Version 0.6-15). Results: All of the hypotheses were supported, which confirms the applicability of the IPMI model in the context of fact-checking health information on social media before sharing. Notably, the IPMI model yielded different results for the low- and high-altruism groups. Conclusion: This study confirmed that the IPMI model can be employed in the context of fact-checking health information. Paying attention to health misinformation can indirectly affect an individual's intention to fact-check health information before sharing it on social media. Furthermore, this study demonstrated the IPMI model's varying predictive powers for individuals with different altruism levels and recommended specific strategies health-promotion officials can take to encourage others to fact-check health information.

3.
Healthcare (Basel) ; 11(7)2023 Mar 26.
Article in English | MEDLINE | ID: covidwho-2290838

ABSTRACT

The evolving availability of health information on social media, regardless of its credibility, raises several questions about its impact on our health decisions and social behaviors, especially during health crises and in conflict settings where compliance with preventive measures and health guidelines is already a challenge due to socioeconomic factors. For these reasons, we assessed compliance with preventive measures and investigated the role of the infodemic in people's non-compliance with COVID-19 containment measures in Yemen. To this end, and to triangulate our data collection, we used a mixed-methods approach in which raw aggregated data were taken and analyzed from multiple sources (the COVID-19 Government Response Tracker and Google COVID-19 Community Mobility Reports), then complemented and verified with in-depth interviews. Our results showed that the population in Yemen complied relatively well with the governmental containment measures at the beginning of the pandemic. However, containment measures were not supported by daily COVID-19 reports due to low transparency, which, together with misinformation and a lack of access to reliable sources, caused the population not to believe in COVID-19 and even to exert social pressure on those who showed some compliance with the WHO guidelines. These results indicate the importance of adopting an infodemic management approach in response to future outbreaks, particularly in conflict settings.

4.
Multimed Tools Appl ; : 1-20, 2022 Jul 28.
Article in English | MEDLINE | ID: covidwho-2236501

ABSTRACT

Research aimed at finding solutions to the problem of the diffusion of distinct forms of non-genuine information online across multiple domains, from opinion spam to fake news detection, has attracted growing interest in recent years. Currently, partly due to the COVID-19 outbreak and the subsequent proliferation of unfounded claims and highly biased content, attention has focused on developing solutions that can automatically assess the genuineness of health information. Most of these approaches, applied both to Web pages and social media content, rely primarily on handcrafted features in conjunction with Machine Learning. In this article, instead, we propose a health misinformation detection model that exploits as features the embedded representations of some structural and content characteristics of Web pages, obtained using an embedding model pre-trained on medical data. These features are employed within a deep learning classification model, which distinguishes genuine health information from health misinformation. The purpose of this article is therefore to evaluate the effectiveness of the proposed model, named Vec4Cred, on the problem considered. The model evolves an earlier version by introducing new features and architectural choices, which are illustrated in this work.

5.
Health Econ Policy Law ; 18(2): 204-217, 2023 04.
Article in English | MEDLINE | ID: covidwho-2221734

ABSTRACT

Health misinformation, most visibly following the COVID-19 infodemic, is an urgent threat that hinders the success of public health policies. It likely contributed, and will continue to contribute, to avoidable deaths. Policymakers around the world are being pushed to tackle this problem. Legislative acts have been rolled out or announced in many countries and at the European Union level. The goal of this paper is not to review particular legislative initiatives, or to assess the impact and efficacy of measures implemented by digital intermediaries, but to reflect on the high constitutional and ethical stakes involved in tackling health misinformation through speech regulation. Our findings suggest that solutions focused on regulating speech are likely to encounter significant constraints, as policymakers grapple with the limitations imposed by freedom of expression and ethical considerations. Solutions focused on empowering individuals - such as media literacy initiatives, fact-checking or credibility labels - are one way to avoid such hurdles.


Subject(s)
COVID-19 , Humans , European Union , Public Policy , Communication , Freedom
6.
2022 International Conference on Electrical, Computer, Communications and Mechatronics Engineering, ICECCME 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2213263

ABSTRACT

Social media use spiked amid the COVID-19 pandemic, resulting in an increase in fake news proliferation, especially health misinformation. Many misinformation detection studies have primarily focused on English texts, and of these, very few have examined linguistic features (syntactic, lexical, and semantic). Lexical features such as the number of upper-case letters have been shown to improve misinformation detection in English and non-English texts; however, the use of lexical features is still in its infancy and thus warrants further investigation. Therefore, a novel lexical-based health misinformation detection model is proposed using machine learning techniques, focusing on two languages: English and standard Malay. A new dataset of fake and real news related to COVID-19 was developed from a fact-checking portal and local media. Common natural language processing tasks, including filtering, tokenization, and stemming, as well as lexical feature extraction, were applied prior to data modelling. Evaluation on a dataset containing 1060 fake and 1060 real news items shows Random Forest to yield the best performance, with an F-measure of 99.6% and accuracy of 96%, followed closely by Support Vector Machine. A similar observation was noted for the Malay corpus. Improved health misinformation detection was observed when linguistic features were included in the model, implying that such features can be successfully used to detect fake news. © 2022 IEEE.
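The lexical-feature pipeline this abstract describes can be sketched as follows. This is a minimal illustration, not the paper's code: the feature set is one plausible reading of "lexical features such as number of upper-case letters", and the toy corpus and labels are invented stand-ins for the actual COVID-19 dataset.

```python
from sklearn.ensemble import RandomForestClassifier

def lexical_features(text):
    # Simple lexical cues of the kind described (illustrative only):
    # upper-case letters, exclamation marks, digits, token count, mean token length.
    tokens = text.split()
    return [
        sum(c.isupper() for c in text),
        text.count("!"),
        sum(c.isdigit() for c in text),
        len(tokens),
        sum(len(t) for t in tokens) / max(len(tokens), 1),
    ]

# Toy corpus standing in for the fake/real COVID-19 news dataset (hypothetical).
texts = [
    "BREAKING!!! MIRACLE CURE kills COVID in 5 MINUTES!!!",
    "Health ministry reports 120 new recoveries today.",
    "SHOCKING!!! Vaccines contain 5G CHIPS, doctors HIDE it!!!",
    "WHO updates its guidance on mask use in indoor settings.",
]
labels = [1, 0, 1, 0]  # 1 = fake, 0 = real

X = [lexical_features(t) for t in texts]
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict([lexical_features("AMAZING!!! SECRET CURE doctors WON'T tell you!!!")]))
```

On real data, these surface features would be combined with the syntactic and semantic features the paper mentions; the sketch shows only the lexical slice.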

7.
2022 International Conference on Electrical, Computer, Communications and Mechatronics Engineering, ICECCME 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2213256

ABSTRACT

The rapid dissemination of misinformation (generally known as fake news) has become worrisome, especially during the ongoing COVID-19 pandemic, both globally and locally. In fact, the proliferation of health-related misinformation has intensified on social media, which many experts believe is contributing to the threats of the pandemic. Sentiment has been shown to improve detection mechanisms in various social media studies; however, this aspect is under-researched in the context of health misinformation. Further, metadata such as location or images that constitute part of real and fake news have also not been fully explored. This study develops a health misinformation detection model using machine learning algorithms, and further assesses the impact of sentiment and images on model performance. Local data gathered from a fact-checking portal were pre-processed, translated, and used to train the detection model. Evaluation results show Support Vector Machine to yield the best performance, with an F-measure of 99.4% and accuracy of 99.1%, followed closely by Random Forest, when sentiment was included; however, the presence of images was not found to significantly improve health misinformation detection. © 2022 IEEE.

8.
Int J Environ Res Public Health ; 20(1)2022 12 20.
Article in English | MEDLINE | ID: covidwho-2200053

ABSTRACT

Health misinformation about nutrition and other health aspects on social media is a current public health concern. Healthcare professionals play an essential role in efforts to detect and correct it. The present study focuses on analyzing the use of competencies associated with training in methodology, health literacy, and critical reading in order to detect sources of health misinformation that use scientific articles to support their false information. A qualitative study was conducted between 15 and 30 January 2022, wherein the participants were recruited from active users in a Twitter conversation on nutrition, diets, and cancer who defined themselves as healthcare professionals. This study demonstrates that health literacy and critical reading competencies allow for the detection of more misinformation messages and are associated with a high rate of responses to users who spread the misinformation messages. Finally, this study proposes the necessity of developing actions to improve health literacy and critical reading competencies among healthcare professionals. However, in order to achieve this, health authorities must develop strategies to psychologically support those healthcare professionals faced with bullying as a result of their activity on social media debunking health hoaxes.


Subject(s)
Health Literacy , Social Media , Humans , Communication , Public Health/methods , Delivery of Health Care
9.
Int J Environ Res Public Health ; 19(23)2022 Nov 29.
Article in English | MEDLINE | ID: covidwho-2143138

ABSTRACT

The characteristics and influence of the echo chamber effect (TECE) of health misinformation diffusion on social media have been investigated by researchers, but the formation mechanism of TECE needs to be explored in greater depth. This research focuses on the influence of users' imitation, intergroup interaction, and reciprocity behavior on TECE, based on the social contagion mechanism. A user comment-reply social network was constructed using the comments of a COVID-19 vaccine video on YouTube. Semantic similarity and an Exponential Random Graph Model (ERGM) were used to calculate TECE and the effect of the three interaction mechanisms on the echo chamber. The results show that there is a weak echo chamber effect in the spread of misinformation about the COVID-19 vaccine. Imitation and intergroup interaction behavior are positively related to TECE. Reciprocity has no significant influence on TECE.
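The semantic-similarity half of this analysis can be illustrated with a simple bag-of-words cosine over hypothetical comment-reply pairs. The comments below are invented, and the paper's actual similarity measure and ERGM fit are not reproduced; this only shows the idea of scoring how alike linked comments are.

```python
import math
from collections import Counter

def cosine(a, b):
    # Bag-of-words cosine similarity between two comments.
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical comment-reply edges from a video thread.
edges = [
    ("the vaccine is unsafe and untested", "agreed the vaccine is unsafe"),
    ("the vaccine is unsafe and untested", "clinical trials show it is safe"),
    ("masks do not work at all", "masks do not work, never did"),
]

# A crude echo-chamber indicator: mean semantic similarity across reply edges.
# High values suggest replies echo the comments they answer.
score = sum(cosine(c, r) for c, r in edges) / len(edges)
print(round(score, 3))
```

A real analysis would use sentence embeddings rather than raw token overlap, and would compare the observed score against a null model, which is where the ERGM enters.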


Subject(s)
COVID-19 , Social Media , Humans , COVID-19 Vaccines , Social Network Analysis , COVID-19/prevention & control , Communication
10.
JMIR Form Res ; 6(11): e38794, 2022 Nov 02.
Article in English | MEDLINE | ID: covidwho-2079985

ABSTRACT

BACKGROUND: Misinformation is often disseminated through social media, where information is spread rapidly and easily. Misinformation affects many patients' decisions to follow a treatment prescribed by health professionals (HPs). For example, chronic patients (eg, those with diabetes) may not follow their prescribed treatment plans. During the recent pandemic, misinformed people rejected COVID-19 vaccines and public health measures, such as masking and physical distancing, and used unproven treatments. OBJECTIVE: This study investigated the impact of health-threatening misinformation on the practices of health care professionals in the United Kingdom, especially during the outbreaks of diseases where a great amount of health-threatening misinformation is produced and released. The study examined the misinformation surrounding the COVID-19 outbreak to determine how it may have impacted practitioners' perceptions of misinformation and how that may have influenced their practice. In particular, this study explored the answers to the following questions: How do HPs react when they learn that a patient has been misinformed? What misinformation do they believe has the greatest impact on medical practice? What aspects of change and intervention in HPs' practice are in response to misinformation? METHODS: This research followed a qualitative approach to collect rich data from a smaller subset of health care practitioners working in the United Kingdom. Data were collected through 1-to-1 online interviews with 13 health practitioners, including junior and senior physicians and nurses in the United Kingdom. RESULTS: Research findings indicated that HPs view misinformation in different ways according to the scenario in which it occurs. Some HPs consider it to be an acute incident exacerbated by the pandemic, while others see it as an ongoing phenomenon (always present) and address it as part of their daily work. HPs are developing pathways for dealing with misinformation. Two main pathways were identified: first, to educate the patient through coaching, advising, or patronizing and, second, to devote resources, such as time and effort, to facilitate 2-way communication between the patient and the health care provider through listening and talking to them. CONCLUSIONS: HPs do not receive the confidence they deserve from patients. The lack of trust in health care practitioners has been attributed to several factors, including (1) trusting alternative sources of information (eg, social media), (2) patients' doubts about HPs' experience (eg, a junior doctor with limited experience), and (3) limited time and availability for patients, especially during the pandemic. There are 2 dimensions of trust: patient-HP trust and patient-information trust. There are 2 necessary actions to address the issue of lack of trust in these dimensions: (1) building trust and (2) maintaining trust. The main recommendations of the HPs are to listen to patients, give them more time, and seek evidence-based resources.

11.
Telematics and Informatics Reports ; : 100020, 2022.
Article in English | ScienceDirect | ID: covidwho-2061916

ABSTRACT

The prevalence of misinformation on social media during COVID-19 caused the emergence of an infodemic. Despite the recognition that excessive social media use results in the dissemination of misinformation, a theoretical understanding of the relationship between social media overload and health misinformation dissemination is lacking. To fill this research gap, this study builds an integrated model to examine how social media overload affects individuals' health misinformation dissemination by investigating the underlying mechanisms. A survey method was employed to collect the data and test the hypotheses. The results revealed that information overload and social media overload affect individuals' health anxiety and exhaustion, which in turn exert effects on their health misinformation dissemination. In theoretical terms, this study uncovers the mechanisms underlying the relationship between social media overload and health misinformation dissemination. In practical terms, this study provides insights into the management of social media usage.

12.
AIS Transactions on Human-Computer Interactions ; 14(2):116-149, 2022.
Article in English | ProQuest Central | ID: covidwho-1924793

ABSTRACT

Health misinformation on social media is an emerging public concern as the COVID-19 infodemic tragically evidences. Key challenges that empower health misinformation’s spread include rapidly advancing social technologies and high social media usage penetration. However, research on health misinformation on social media lacks cohesion and has received limited attention from information systems (IS) researchers. Given this issue’s importance and relevance to the IS discipline, we summarize the current state of research on this emerging topic and identify research gaps together with meaningful research questions. Following a two-step literature search, we identify and analyze 101 papers. Drawing on the Shannon-Weaver communication model, we propose an integrative stage-based framework of health misinformation on social media. Based on literature analysis, we identify research opportunities and prescribe directions for future research on health misinformation on social media.

13.
Technol Soc ; 70: 102048, 2022 Aug.
Article in English | MEDLINE | ID: covidwho-1907815

ABSTRACT

In the ongoing COVID-19 pandemic, people have spread various COVID-19-related rumors and hoaxes through online social networks (OSNs) that negatively influence society. The proposed research presents a unique and innovative approach to controlling COVID-19 rumors through the power of opinion leaders (OLs) in OSNs. The entire process is partitioned into two phases: the first phase describes the novel Reputation-based Opinion Leader Identification (ROLI) algorithm, including a unique voting method to identify the top-T OLs in the OSN. The second phase describes the technique to measure the aggregated polarity score of each posted tweet/post and compute each user's reputation. The empirical reputation is used to calculate the user's trust, the post's entropy, and its veracity. If the experimental entropy of the post is lower than the empirical threshold value, the post is likely to be categorized as a rumor. The proposed approach was validated on the Twitter, Instagram, and Reddit social networks. The ROLI algorithm provides 91% accuracy, 93% precision, 95% recall, and a 94% F1-score over other Social Network Analysis (SNA) measures for finding OLs in OSNs. Moreover, the rumor-controlling effectiveness and efficiency of the proposed approach are estimated on three standard metrics - affected degree, represser degree, and diffuser degree - with improvements of 26%, 22%, and 23%, respectively. The concluding outcomes illustrate that the influence of OLs is exceptionally significant in controlling COVID-19 rumors.
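The entropy-threshold rule from the second phase can be sketched as follows. The character-level Shannon entropy and the threshold value are illustrative assumptions, since the abstract does not specify the exact formulation used in the ROLI pipeline.

```python
import math

def shannon_entropy(text):
    # Character-level Shannon entropy of a post, in bits per character.
    # One plausible reading of the paper's "post entropy" (an assumption).
    counts = {}
    for ch in text:
        counts[ch] = counts.get(ch, 0) + 1
    n = len(text)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def is_likely_rumor(post, threshold=3.5):
    # Mirrors the stated rule: entropy below an empirical threshold flags
    # the post as a likely rumor. The threshold here is illustrative.
    return shannon_entropy(post) < threshold

print(is_likely_rumor("aaaa aaaa!!!"))  # repetitive, low-entropy post
print(is_likely_rumor("The WHO published updated vaccination guidance and new case figures today."))
```

In the paper this check is combined with the user's reputation-derived trust score; the sketch isolates only the entropy test.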

14.
J Med Internet Res ; 24(6): e37623, 2022 06 20.
Article in English | MEDLINE | ID: covidwho-1879375

ABSTRACT

BACKGROUND: During global health crises such as the COVID-19 pandemic, rapid spread of misinformation on social media has occurred. The misinformation associated with COVID-19 has been analyzed, but little attention has been paid to developing a comprehensive analytical framework to study its spread on social media. OBJECTIVE: We propose an elaboration likelihood model-based theoretical model to understand the persuasion process of COVID-19-related misinformation on social media. METHODS: The proposed model incorporates the central route feature (content feature) and peripheral features (including creator authority, social proof, and emotion). The central-level COVID-19-related misinformation feature includes five topics: medical information, social issues and people's livelihoods, government response, epidemic spread, and international issues. First, we created a data set of COVID-19 pandemic-related misinformation based on fact-checking sources and a data set of posts that contained this misinformation on real-world social media. Based on the collected posts, we analyzed the dissemination patterns. RESULTS: Our data set included 11,450 misinformation posts, with medical misinformation as the largest category (n=5359, 46.80%). Moreover, the results suggest that both the least (4660/11,301, 41.24%) and most (2320/11,301, 20.53%) active users are prone to sharing misinformation. Further, posts related to international topics that have the greatest chance of producing a profound and lasting impact on social media exhibited the highest distribution depth (maximum depth=14) and width (maximum width=2355). Additionally, 97.00% (2364/2437) of the spread was characterized by radiation dissemination. CONCLUSIONS: Our proposed model and findings could help to combat the spread of misinformation by detecting suspicious users and identifying propagation characteristics.


Subject(s)
COVID-19 , Social Media , Communication , Humans , Pandemics , SARS-CoV-2
15.
2nd Workshop Reducing Online Misinformation through Credible Information Retrieval, ROMCIR 2022 ; 3138:11-26, 2022.
Article in English | Scopus | ID: covidwho-1871081

ABSTRACT

The worldwide COVID-19 pandemic has brought about many changes in people's lives. It also poses a new challenge to information search services, because our understanding of the virus is still limited and there is a great deal of misinformation online. In such a situation, providing useful and correct information to the public is not straightforward. The responsibility of search engines is crucial because many people make decisions based on the information available to them. In this work, we improve retrieval quality via data fusion. In particular, a clustering-based approach is proposed for selecting a subset of systems from all available ones for finding relevant, credible, and correct documents. Experimenting with a group of runs submitted to the 2020 TREC Health Misinformation Track, we demonstrate that data fusion is a very beneficial approach for this task, whether measured by traditional metrics such as MAP or task-specific metrics such as CAM. When choosing 17 runs, one third of all available component retrieval systems, the linear combination method is better than the best component retrieval system by 31.42% in MAP and 21.72% in CAM. The proposed methods are also better than the state-of-the-art subset-selection method by a clear margin. © 2022 Copyright for this paper by its authors.
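A linear-combination fusion of the kind evaluated here can be sketched in a few lines. The runs, scores, and uniform weights below are hypothetical; the paper's clustering-based subset selection and its weight training on TREC relevance data are omitted.

```python
def linear_fusion(runs, weights=None):
    """Fuse several retrieval runs by a weighted linear combination of scores.

    runs: list of {doc_id: score} dicts, one per component system.
    Returns doc_ids ranked by fused score, best first.
    """
    weights = weights or [1.0] * len(runs)
    fused = {}
    for run, w in zip(runs, weights):
        # Min-max normalise each run so scores are comparable before mixing.
        lo, hi = min(run.values()), max(run.values())
        for doc, s in run.items():
            norm = (s - lo) / (hi - lo) if hi > lo else 0.0
            fused[doc] = fused.get(doc, 0.0) + w * norm
    return sorted(fused, key=fused.get, reverse=True)

# Two hypothetical component systems disagreeing on document order.
run_a = {"d1": 9.0, "d2": 7.5, "d3": 2.0}
run_b = {"d2": 0.9, "d3": 0.8, "d1": 0.1}
print(linear_fusion([run_a, run_b]))  # → ['d2', 'd1', 'd3']
```

With trained (non-uniform) weights, systems that retrieve more credible and correct documents would be given larger coefficients, which is what drives the MAP and CAM gains reported.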

16.
Front Public Health ; 9: 764681, 2021.
Article in English | MEDLINE | ID: covidwho-1662635

ABSTRACT

Social media has been crucial for seeking and communicating COVID-19 information. However, social media has also promulgated misinformation, which is particularly concerning among Asian Americans who may rely on in-language information and utilize social media platforms to connect to Asia-based networks. There is limited literature examining social media use for COVID-19 information and the subsequent impact of misinformation on health behaviors among Asian Americans. This perspective reviews recent research, news, and gray literature to examine the dissemination of COVID-19 misinformation on social media platforms to Chinese, Korean, Vietnamese, and South Asian Americans. We discuss the linkage of COVID-19 misinformation to health behaviors, with emphasis on COVID-19 vaccine misinformation and vaccine decision-making in Asian American communities. We then discuss community- and research-driven responses to investigate misinformation during the pandemic. Lastly, we propose recommendations to mitigate misinformation and address the COVID-19 infodemic among Asian Americans.


Subject(s)
COVID-19 , Social Media , Asian , COVID-19 Vaccines , Communication , Humans , SARS-CoV-2 , United States/epidemiology
17.
23rd International Conference on Information Integration and Web Intelligence, iiWAS 2021 ; : 267-277, 2021.
Article in English | Scopus | ID: covidwho-1631618

ABSTRACT

False information in online health-related articles is of great concern, as witnessed during the COVID-19 pandemic. It is markedly different from fake news in the political context, as health information should be evaluated against the most recent and reliable medical resources, such as scholarly repositories. However, one of the challenges with such an approach is the retrieval of the pertinent resources. In this work, we formulate a new unsupervised task of generating queries using keywords extracted from a health-related article, which can then be applied to retrieve relevant, authoritative, and reliable medical content from scholarly repositories to assess the article's veracity. We propose a three-step approach and illustrate that our method is able to generate effective queries. We also curate a new dataset to aid evaluation for this task, which will be made available upon request. © 2021 ACM.

18.
Int J Drug Policy ; 99: 103470, 2022 01.
Article in English | MEDLINE | ID: covidwho-1415361

ABSTRACT

BACKGROUND: An unproven "nicotine hypothesis" that indicates nicotine's therapeutic potential for COVID-19 has been proposed in recent literature. This study is about Twitter posts that misinterpret this hypothesis to make baseless claims about benefits of smoking and vaping in the context of COVID-19. We quantify the presence of such misinformation and characterize the tweeters who post such messages. METHODS: Twitter premium API was used to download tweets (n = 17,533) that match terms indicating (a) nicotine or vaping themes, (b) a prophylactic or therapeutic effect, and (c) COVID-19 (January-July 2020) as a conjunctive query. A constraint on the length of the span of text containing the terms in the tweets allowed us to focus on those that convey the therapeutic intent. We hand-annotated these filtered tweets and built a classifier that identifies tweets that extrapolate the nicotine hypothesis to smoking/vaping with a positive predictive value of 85%. We analyzed the frequently used terms in author bios, top Web links, and hashtags of such tweets. RESULTS: 21% of our filtered COVID-19 tweets indicate a vaping or smoking-based prevention/treatment narrative. Qualitative analyses show a variety of ways therapeutic claims are being made and tweeter bios reveal pre-existing notions of positive stances toward vaping. CONCLUSION: The social media landscape is a double-edged sword in tobacco communication. Although it increases information reach, consumers can also be subject to confirmation bias when exposed to inadvertent or deliberate framing of scientific discourse that may border on misinformation. This calls for circumspection and additional planning in countering such narratives as the COVID-19 pandemic continues to ravage our world. Our results also serve as a cautionary tale in how social media can be leveraged to spread misleading information about tobacco products in the wake of pandemics.
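The conjunctive query with a span-length constraint can be approximated as follows. The term groups and the 12-token window are illustrative guesses, not the paper's exact query or threshold; the point is how requiring all three themes to co-occur in a short span keeps tweets that convey therapeutic intent.

```python
import re
from itertools import product

# Term groups approximating the paper's conjunctive query:
# (a) nicotine/vaping themes, (b) prophylactic/therapeutic effect, (c) COVID-19.
# The specific terms here are illustrative.
GROUPS = [
    {"nicotine", "vaping", "smoking"},
    {"cure", "cures", "prevent", "prevents", "protects", "treatment"},
    {"covid", "covid19", "coronavirus"},
]

def matches(tweet, max_span_tokens=12):
    """True if one term from every group co-occurs within a short token span,
    mimicking the span-length constraint used to capture therapeutic intent."""
    tokens = re.findall(r"[a-z0-9]+", tweet.lower())
    positions = []
    for group in GROUPS:
        hits = [i for i, t in enumerate(tokens) if t in group]
        if not hits:
            return False  # conjunctive query: every group must match
        positions.append(hits)
    # Does some choice of one hit per group fit inside the span window?
    return any(max(c) - min(c) < max_span_tokens for c in product(*positions))

print(matches("heard that nicotine can prevent covid, time to start vaping?"))
print(matches("I quit smoking during covid lockdown"))
```

In the study, tweets passing a filter like this were then hand-annotated to train the classifier that reached an 85% positive predictive value.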


Subject(s)
COVID-19 , Social Media , Humans , Nicotine , Pandemics , SARS-CoV-2
19.
Int J Environ Res Public Health ; 18(11)2021 05 21.
Article in English | MEDLINE | ID: covidwho-1243989

ABSTRACT

While the coronavirus disease 2019 (COVID-19) pandemic was spreading all over the world, misinformation, without prudent journalistic judgment of media content online, began circulating rapidly and influencing public opinion on social media. This quantitative study advances previous misinformation research by proposing and examining a theoretical model following an "influence of presumed influence" perspective. Two survey studies were conducted on participants located in the United States (N = 1793) and China (N = 504), respectively, to test the applicability of the influence of presumed influence theory. Results indicated that anger and anxiety significantly predicted the perceived influence of misinformation on others; presumed influence on others positively affected public support for corrective and restrictive actions in both the U.S. and China. Further, anger toward misinformation led to public willingness to self-correct in the U.S. and China. In contrast, anxiety only took effect in facilitating public support for restrictive actions in the U.S. By conducting survey research in China and the U.S., this study expands the influence of presumed influence (IPI) hypothesis to digital misinformation in both Western and non-Western contexts. This research provides implications for social media companies and policy makers to combat misinformation online.


Subject(s)
COVID-19 , Social Media , China , Communication , Global Health , Humans , SARS-CoV-2 , United States
20.
Risk Manag Healthc Policy ; 14: 1869-1879, 2021.
Article in English | MEDLINE | ID: covidwho-1232504

ABSTRACT

BACKGROUND: During a public health emergency, social media is a major conduit or vector for spreading health misinformation. Understanding the characteristics of health misinformation is a premise for rebutting and purposefully correcting such misinformation on social media. METHODS: Using a sample of China's misinformation on social media related to the COVID-19 outbreak (N=547), the objective of this article was to illustrate the characteristics of said misinformation through descriptive analysis, including its typology, the most-mentioned information, and a developmental timeline. RESULTS: The results reveal that misinformation related to preventive and therapeutic methods is the most-mentioned type. Other types of misinformation associated with people's daily lives are also widespread. Moreover, cultural and social beliefs shape the perception and propagation of misinformation, and changes in the crisis situation are relevant to variation in the types of misinformation. CONCLUSION: Based on these results, strategies of health communication for managing misinformation on social media are given, such as using credible and expert sources. Traditional beliefs and perceptions also play a vital role in health communication. In sum, combating misinformation on social media cannot be a single effort to correct misinformation or prevent its spread; instead, scholars, journalists, educators, and citizens must collaboratively identify and correct misinformation.
